17 research outputs found

    A Comprehensive Survey On Client Selections in Federated Learning

    Full text link
    Federated Learning (FL) is a rapidly growing field of machine learning that allows a model to be trained across multiple decentralized devices. The selection of clients that participate in the training process is a critical factor for the performance of the overall system. In this survey, we provide a comprehensive overview of state-of-the-art client selection techniques in FL, including their strengths and limitations, as well as the challenges and open issues that need to be addressed. We cover conventional selection techniques such as random selection, where all clients or a randomly chosen subset of them are used for training. We also cover performance-aware selection as well as resource-aware selection for resource-constrained and heterogeneous networks, and we discuss the use of client selection for model security enhancement. Lastly, we discuss open issues and challenges related to client selection in dynamic, constrained, and heterogeneous networks.
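
    As a rough illustration of the conventional baseline mentioned above, the sketch below samples a random subset of clients for each training round. It is a minimal Python sketch; the names `clients`, `fraction`, and `round_seed` are illustrative assumptions, not part of any surveyed method.

```python
import random

def select_clients(clients, fraction=0.1, round_seed=None):
    """Randomly pick a fraction of clients to participate in one FL round.

    Minimal sketch of the random-selection baseline: either all clients
    (fraction=1.0) or a uniformly random subset take part in training.
    """
    rng = random.Random(round_seed)
    k = max(1, int(len(clients) * fraction))
    return rng.sample(clients, k)

# Example: 100 registered clients, 10% of them participate in round 3.
participants = select_clients(list(range(100)), fraction=0.1, round_seed=3)
```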

    On Dependability Traffic Load and Energy Consumption Tradeoff in Data Center Networks

    Get PDF
    Mega data centers (DCs) are considered efficient and promising infrastructures for supporting numerous cloud computing services such as online office suites, online social networking, Web search, and IT infrastructure outsourcing. The scalability of these services is influenced by the performance and dependability characteristics of the DCs. Consequently, DC networks are built with a large number of network devices and links in order to achieve high performance and reliability, and these requirements increase the energy consumption of DCs. In fact, it was estimated in 2010 that DCs would consume about 120 billion kilowatt-hours of electricity in 2012, which is about 2.8% of the total electricity bill in the USA. According to industry estimates, the USA data center market reached almost US$39 billion in 2009, growing from US$16.2 billion in 2005. One of the primary reasons behind this issue is that all the links and devices are always powered on regardless of the traffic status. Statistics show that the traffic alternates drastically, especially between mornings and nights and between working days and weekends. Thus, network utilization depends on the actual period, and the peak capacity of the network is generally reached only at rush times. This non-proportionality between traffic load and energy consumption is caused by the fact that, most of the time, only a subset of the network devices and links is enough to forward the data packets to their destinations, while the remaining idle nodes are simply wasting energy. Such observations inspired us to propose a new approach that powers off unused links by deactivating the end-ports of each one of them in order to save energy. Port deactivation has been proposed in many prior works; however, these solutions suffer from high computational complexity, increased network delay, and reduced network reliability.
    In this paper, we propose a new approach to reduce the power consumption in DCs. By exploiting the correlation in time of the network traffic, the proposed approach uses the traffic matrix of the current network state and manages the state of switch ports (on/off) at the beginning of each period, while making sure to keep the data center fully connected. During the rest of each time period, the network must be able to forward its traffic through the active ports. The decision to close or open a port depends on a predefined threshold value: the port is closed only if the sum of the traffic generated by its connected node is less than the threshold. We also investigate the minimum period of time during which a port should not change its status; this minimum period is necessary given that it takes time and energy to switch a port on and off. One of the major challenges in this work is powering off idle devices for additional energy saving while guaranteeing the connectivity of every server, so we propose a new traffic-aware algorithm that provides a tradeoff between energy saving and reliability. For instance, in HyperFlatNet, simulation results show that the proposed approach reduces the energy consumption by 1.8×10^4 WU (watts per unit of time) for a correlated network with 1000 servers (38% energy saving). In addition, thanks to the proposed traffic-aware algorithm, the new approach performs well even at high failure rates (up to 30%): when one third of the links fail, the connection failure rate is only 0.7%. Both theoretical analysis and simulation experiments are conducted to evaluate and verify the performance of the proposed approach compared to state-of-the-art techniques.
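
    As a minimal sketch of the threshold rule described in this abstract (not the paper's full algorithm), the Python snippet below decides port states at the start of a period; all names (`traffic`, `threshold`, `min_dwell`) are illustrative assumptions, and the connectivity check mentioned in the abstract must still be performed separately.

```python
def plan_port_states(traffic, threshold, min_dwell, last_change, now, is_on):
    """Decide which ports stay on at the beginning of a time period.

    traffic:     dict port -> traffic generated by the node attached to it
    threshold:   level below which a port is a candidate for power-off
    min_dwell:   minimum number of periods a port must keep its state
    last_change: dict port -> period index of the port's last state change
    now:         current period index
    is_on:       dict port -> current on/off state
    Connectivity of every server must still be verified before applying
    the plan, as required by the traffic-aware algorithm in the paper.
    """
    planned = {}
    for port, load in traffic.items():
        if now - last_change.get(port, -min_dwell) < min_dwell:
            planned[port] = is_on[port]        # too soon to toggle this port again
        else:
            planned[port] = load >= threshold  # power off only if below the threshold
    return planned
```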

    Sparsity-aware Multiple Relay Selection In Large Decode-and-forward Relay Networks

    Get PDF
    Cooperative communication is a promising technology that has attracted significant attention recently thanks to its ability to achieve spatial diversity in wireless networks with only single-antenna nodes. The different nodes of a cooperative system can share their resources so that a virtual Multiple Input Multiple Output (MIMO) system is created, which leads to spatial diversity gains. To exploit this diversity, a variety of cooperative protocols have been proposed in the literature under different design criteria and channel information availability assumptions. Among these protocols, two of the most widely used are the amplify-and-forward (AF) and decode-and-forward (DF) protocols. However, in large-scale relay networks, the relay selection process becomes highly complex. In fact, in many applications such as device-to-device (D2D) communication networks and wireless sensor networks, a large number of cooperating nodes are used, which leads to a dramatic increase in the complexity of the relay selection process. To solve this problem, the sparsity of the relay selection vector has been exploited to reduce the multiple relay selection complexity for large AF cooperative networks while also improving the bit error rate performance. In this work, we extend the study from AF to large-scale decode-and-forward (DF) relay networks. Based on exploiting the sparsity of the relay selection vector, we propose and compare two techniques (referred to as T1 and T2) that aim to improve the performance of multiple relay selection in large-scale DF relay networks. In fact, when only a few relays are selected from a large number of relays, the relay selection vector becomes sparse. Hence, utilizing recent advances in sparse signal recovery theory, we propose to use sparse recovery algorithms such as Orthogonal Matching Pursuit (OMP) to solve the relay selection problem. Our theoretical and simulation results demonstrate that the two proposed sparsity-aware relay selection techniques improve the outage performance and reduce the computational complexity at the same time compared with the conventional exhaustive search (ES) technique. In fact, compared to the ES technique, T1 reduces the selection complexity by O(K^2 N) (where N is the number of relays and K is the number of selected relays) while outperforming it in terms of outage probability irrespective of the relays' positions. Technique T2 yields a higher outage probability than T1 but further reduces the complexity, offering a compromise between complexity and outage performance. The best selection threshold for T2 is also derived theoretically and validated by simulations, which enables T2 to also improve the outage probability compared with the ES technique.
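
    To illustrate the sparse-recovery viewpoint behind T1 and T2 (without reproducing either technique exactly), the sketch below runs a generic Orthogonal Matching Pursuit loop to pick K relays; the dictionary `A` of per-relay channel signatures and the target vector `y` are assumptions made for illustration only.

```python
import numpy as np

def omp_select_relays(A, y, K):
    """Greedy OMP: pick the K columns (relays) of A that best explain y.

    A: (m, N) matrix whose columns model each relay's contribution
    y: (m,) target vector (e.g., a desired combined channel response)
    Returns the indices of the K selected relays and the fitted weights.
    """
    residual = y.copy()
    selected = []
    weights = np.zeros(0)
    for _ in range(K):
        correlations = np.abs(A.T @ residual)
        correlations[selected] = -np.inf             # never reselect a relay
        selected.append(int(np.argmax(correlations)))
        weights, *_ = np.linalg.lstsq(A[:, selected], y, rcond=None)
        residual = y - A[:, selected] @ weights
    return selected, weights
```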

    Exploiting Sparsity in Amplify-and-Forward Broadband Multiple Relay Selection

    Get PDF
    Cooperative communication has attracted significant attention in the last decade due to its ability to increase the spatial diversity order with only single-antenna nodes. However, most of the techniques in the literature are not suitable for large cooperative networks, such as device-to-device and wireless sensor networks, which are composed of a massive number of active devices and therefore significantly increase the relay selection complexity. To solve this problem and enhance the spatial and frequency diversity orders of large amplify-and-forward cooperative communication networks, in this paper we develop three multiple relay selection and distributed beamforming techniques that exploit sparse signal recovery theory to process the subcarriers using the low-complexity Orthogonal Matching Pursuit (OMP) algorithm. In particular, by separating all the subcarriers or some subcarrier groups from each other and by optimizing the selection and beamforming vector(s) using the OMP algorithm, a higher level of frequency diversity can be achieved. This increased diversity order allows the proposed techniques to outperform existing techniques in terms of bit error rate at a lower computational complexity. A detailed performance-complexity tradeoff analysis, as well as Monte Carlo simulations, is presented to quantify the performance and efficiency of the proposed techniques. © 2013 IEEE. This publication was made possible by NPRP grant 8-627-2-260 and NPRP grant 6-070-2-024 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
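
    The per-group processing idea can be pictured with the simple sketch below, which splits the subcarriers into groups and selects K relays independently in each group; it is a stand-in using an aggregate-gain ranking rather than the paper's OMP-based optimization, and all names are illustrative assumptions.

```python
import numpy as np

def per_group_selection(channel_gains, num_groups, K):
    """Split subcarriers into groups and pick K relays independently per group.

    channel_gains: (num_subcarriers, N) effective source-relay-destination gains
    Returns, for each group, the indices of the K strongest relays (a simple
    stand-in for the per-group selection/beamforming optimization).
    """
    groups = np.array_split(channel_gains, num_groups, axis=0)
    selections = []
    for g in groups:
        score = np.sum(np.abs(g) ** 2, axis=0)   # aggregate gain per relay in this group
        selections.append(np.argsort(score)[-K:][::-1].tolist())
    return selections
```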

    A Secure Energy Efficient Scheme for Cooperative IoT Networks

    No full text
    A secure, energy-efficient approach is proposed to connect Internet of Things (IoT) sensors that operate with limited power resources. This is done by simultaneously optimizing the energy efficiency, the communication rate, and the network security, while limiting potential data leakage and tracking the evolution of the finite battery status. The proposed model uses spatial diversity, in addition to artificial jamming introduced by an intermediate device, to forward the data from the sensors to the destination and to secure the communication links without draining the rechargeable batteries. The energy harvested by the source is also maximized without affecting the security level of the network. The outage secrecy capacity is derived to evaluate the security level. Furthermore, the system power stability is analyzed using Markov chains and statistical approaches to validate the efficiency of the proposed technique in maintaining the system in a self-sufficient mode and making it operate without the assistance of external power resources.
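
    For context on the security metric mentioned above, a standard textbook formulation of the instantaneous secrecy capacity and the associated secrecy outage probability (not necessarily the exact expression derived in the paper, which also accounts for artificial jamming and harvested energy) is:

```latex
% Reference definitions: \gamma_D and \gamma_E denote the SNRs at the legitimate
% destination and at the eavesdropper, and R_s is the target secrecy rate.
\[
C_s = \Bigl[\log_2\!\left(1+\gamma_D\right) - \log_2\!\left(1+\gamma_E\right)\Bigr]^{+},
\qquad
P_{\mathrm{out}}(R_s) = \Pr\bigl\{C_s < R_s\bigr\}.
\]
```

    Intuitively, the artificial jamming introduced by the intermediate device degrades the eavesdropper's SNR, which raises the secrecy capacity without extra transmit power at the sensors.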

    Game Theory for Anti-Jamming Strategy in Multichannel Slow Fading IoT Networks

    No full text
    The open nature of the wireless communication medium renders it vulnerable to jamming attacks by malicious users. Several techniques have been proposed in the literature to detect the presence of jammers and to avoid such attacks. Most of these techniques aim to reduce the effect of jamming signals by increasing the transmission power or by using complex coordination schemes. However, the implementation of such power-consuming techniques might be challenging or not feasible on resource-limited Internet-of-Things (IoT) devices. Therefore, a defense strategy against jamming attacks in health monitoring IoT networks is proposed in this article. This strategy operates over orthogonal frequency-division multiplexing channels and takes the effect of slow fading channels into consideration in the strategy design. Specifically, the jamming combating problem is formulated as a Colonel Blotto game whose equilibrium minimizes the worst-case jamming effect on the IoT sensors' communications. Then, the optimal power allocation strategy for all potential jammer power ranges is derived by investigating the Nash equilibrium of the game. The proposed strategy is shown to be efficient in combating jamming attacks while minimizing the IoT sensors' power consumption. This work was supported by the NPRP Award under Grant NPRP 10-1205-160012 from the Qatar National Research Fund.
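
    A generic way to write the underlying max-min power allocation problem over N orthogonal channels (an illustrative formulation under simple block-fading assumptions, not the paper's exact Colonel Blotto model) is:

```latex
% The IoT transmitter allocates power p_n and the jammer allocates power j_n on
% each of the N channels, both under total-power budgets P and J.
\[
\max_{\mathbf{p}\ge 0}\ \min_{\mathbf{j}\ge 0}\
\sum_{n=1}^{N} \log_2\!\left(1+\frac{p_n\, g_n}{\sigma^2 + j_n\, h_n}\right)
\quad \text{s.t.} \quad \sum_{n=1}^{N} p_n \le P,\qquad \sum_{n=1}^{N} j_n \le J,
\]
```

    where g_n and h_n are the legitimate and jamming channel gains on channel n. The Colonel Blotto view treats the two power budgets as troops distributed across the channels, and the equilibrium generally randomizes these allocations.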

    Secure and Privacy-Preserving Federated Learning-Based Resource Allocation for Next Generation Networks

    No full text
    This paper surveys recent research on Federated Learning-based resource allocation for next-generation networks with the aim of identifying research gaps and potential directions for future work. We start by outlining the main challenges and requirements for secure and privacy-preserving resource allocation in these networks. The existing solutions and techniques proposed in the literature are then reviewed, their strengths and limitations are analyzed, and trade-offs in terms of security, privacy, efficiency, and scalability are discussed. Research gaps and promising directions for future work are identified, such as integrating multiple solutions (e.g., game theory or optimization techniques with Federated Learning) and developing new models and algorithms for AI-based resource allocation that address the specific challenges and requirements of next-generation networks. The goal is to inspire further research in this critical and rapidly evolving field.

    Accelerated IoT Anti-Jamming: A Game Theoretic Power Allocation Strategy

    No full text
    A jamming-combating power allocation strategy is proposed to secure data communication in IoT networks. The proposed strategy aims to minimize the worst-case jamming effect on the intended transmission under multichannel fading and total power constraints by modelling the problem as a Colonel Blotto game and computing its Nash Equilibrium (NE). Both logistic regression and a specifically designed algorithm are used to obtain the equilibrium strategy iteratively and rapidly. The conducted theoretical derivations and Monte Carlo simulations confirm that the proposed approach can secure the IoT network with a limited amount of power and with a number of iterations that is much smaller than that of state-of-the-art techniques.
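
    As a toy illustration of computing such an equilibrium iteratively (not the paper's logistic-regression-accelerated algorithm), the Python sketch below runs a fictitious-play-style loop in which the transmitter water-fills against the jammer's average allocation and the jammer responds with a simple heuristic; every name and modelling choice here is an assumption for illustration.

```python
import numpy as np

def waterfill(inv_gain, budget):
    """Water-filling over channels with effective inverse gains inv_gain,
    using a short bisection on the water level."""
    lo, hi = 0.0, budget + inv_gain.max()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_gain, 0.0).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - inv_gain, 0.0)

def fictitious_play(g, h, P, J, sigma2=1.0, iters=200):
    """Toy iterative scheme: the transmitter water-fills against the running
    average of the jammer's allocation; the jammer (heuristically) concentrates
    its budget on the strongest effective links."""
    N = len(g)
    j_avg = np.full(N, J / N)
    p = np.zeros(N)
    for t in range(1, iters + 1):
        p = waterfill((sigma2 + j_avg * h) / g, P)       # transmitter response
        j = J * (p * g) / max((p * g).sum(), 1e-12)      # heuristic jammer response
        j_avg += (j - j_avg) / t                         # running average (fictitious play)
    return p, j_avg
```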

    Efficient techniques for energy saving in data center networks

    No full text
    Data centers are constructed with a huge number of network devices to support expanding cloud-based services. These devices are provisioned to deliver the highest performance when the network is fully utilized. However, the peak capacity of the network is rarely reached. Consequently, many devices sit in an idle state and cause considerable energy waste, leading to a non-proportionality between the network load and the energy consumed. In this paper, we propose a new approach to improve the efficiency of data centers in terms of energy consumption. Our approach exploits the correlation in time of the inter-node communication traffic, together with some topological features, to maximize energy saving with only a minor increase in the average path length. It dynamically controls the number of active communication links by turning ports (switch ports and node ports) off and on. Simulation results confirm the energy saving achieved by the proposed approach, with a low impact on the average path length.
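
    As a rough sketch of switching links off while keeping every server reachable (the traffic-correlation and path-length aspects of the proposed approach are not modelled here), the snippet below greedily powers off the least-loaded links whose removal does not disconnect the topology; `networkx` and all names are assumptions for illustration.

```python
import networkx as nx

def deactivate_light_links(G, traffic):
    """Greedily turn off the least-loaded links whose removal keeps G connected.

    G:       networkx Graph of the data center topology
    traffic: dict (u, v) -> traffic carried on that link in the current period
    Returns the set of links that can be powered off for this period.
    """
    off = set()
    for u, v in sorted(G.edges(), key=lambda e: traffic.get(e, 0.0)):
        G.remove_edge(u, v)
        if not nx.is_connected(G):
            G.add_edge(u, v)          # removal would isolate servers; keep it on
        else:
            off.add((u, v))
    return off
```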

    A Novel Pandemic Tracking Map: From Theory to Implementation

    Get PDF
    The worldwide spread of the novel COVID-19 virus has caused major economic and social damage, along with the death of more than two million people so far around the globe. Therefore, the design of a model that can predict which persons are most likely to be infected is a necessity to control the spread of this infectious disease, as well as of any future novel pandemic. In this paper, an Internet of Things (IoT) sensing network is designed to anonymously track the movement of individuals in crowded zones by collecting the WiFi and Bluetooth beacons emitted by mobile phones in order to triangulate and estimate the locations of individuals inside buildings without violating their privacy. A mathematical model is presented to compute the expected time of exposure between users. Furthermore, a virus spread mathematical model as well as iterative spread tracking algorithms are proposed to predict the probability of individuals being infected, even with limited data.
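
    A minimal sketch of the two building blocks mentioned above, assuming the standard log-distance path-loss model for beacon ranging and a fixed proximity radius for exposure (the paper's actual models and parameters may differ):

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.5):
    """Estimate distance (meters) from a received beacon's RSSI using the
    standard log-distance path-loss model: RSSI = RSSI_1m - 10*n*log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

def exposure_time(track_a, track_b, radius=2.0, dt=1.0):
    """Total time two anonymized tracks spend within `radius` meters of each
    other; each track is a list of (x, y) positions sampled every dt seconds."""
    return dt * sum(
        1 for (xa, ya), (xb, yb) in zip(track_a, track_b)
        if math.hypot(xa - xb, ya - yb) <= radius
    )
```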